Quantum Inspired Training for Boltzmann Machines
Abstract
We present an efficient classical algorithm for training deep Boltzmann machines (DBMs) that uses rejection sampling in concert with variational approximations to estimate the gradients of the training objective function. Our algorithm is inspired by a recent quantum algorithm for training DBMs [1]. We obtain rigorous bounds on the errors in the approximate gradients; in turn, we find that choosing the instrumental distribution to minimize the α = 2 divergence with the Gibbs state minimizes the asymptotic algorithmic complexity. Our rejection sampling approach can yield more accurate gradients than low-order contrastive divergence training, and the work required to obtain increasingly accurate gradients is easily parallelized. Finally, our algorithm can train full Boltzmann machines and scales more favorably with the number of layers in a DBM than greedy contrastive divergence training.
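To make the general idea concrete, the sketch below illustrates in Python how the intractable model expectation in the log-likelihood gradient, ∂ log L / ∂W_ij = ⟨v_i h_j⟩_data − ⟨v_i h_j⟩_model, can be estimated by rejection sampling from a mean-field (product-Bernoulli) instrumental distribution instead of running a Gibbs chain. This is a minimal illustrative sketch, not the paper's algorithm: the tiny RBM, the mean-field fit, and the heuristic bound log_M are assumptions made for the example, whereas the paper bounds the rejection step rigorously.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny binary RBM used purely for illustration.
n_vis, n_hid = 6, 4
W = 0.1 * rng.standard_normal((n_vis, n_hid))
b = np.zeros(n_vis)   # visible biases
c = np.zeros(n_hid)   # hidden biases

def unnorm_log_p(v, h):
    """log p~(v, h) = -E(v, h): the unnormalised log Gibbs weight."""
    return v @ b + h @ c + v @ W @ h

def mean_field_marginals(n_steps=50):
    """Crude product-Bernoulli (mean-field) approximation of the Gibbs state."""
    mu_v, mu_h = np.full(n_vis, 0.5), np.full(n_hid, 0.5)
    for _ in range(n_steps):
        mu_h = 1.0 / (1.0 + np.exp(-(c + mu_v @ W)))
        mu_v = 1.0 / (1.0 + np.exp(-(b + W @ mu_h)))
    return mu_v, mu_h

def model_stats_by_rejection(n_proposals=20000, log_M=3.0):
    """Estimate <v h^T> under the model by rejection sampling from the
    mean-field instrumental distribution q.  log_M is a heuristic bound on
    log(p~/q); the paper controls this quantity rigorously."""
    mu_v, mu_h = mean_field_marginals()
    stats, n_accept = np.zeros((n_vis, n_hid)), 0
    for _ in range(n_proposals):
        v = (rng.random(n_vis) < mu_v).astype(float)
        h = (rng.random(n_hid) < mu_h).astype(float)
        log_q = (np.log(np.where(v == 1.0, mu_v, 1.0 - mu_v)).sum()
                 + np.log(np.where(h == 1.0, mu_h, 1.0 - mu_h)).sum())
        if np.log(rng.random()) < unnorm_log_p(v, h) - log_q - log_M:
            stats += np.outer(v, h)
            n_accept += 1
    return stats / max(n_accept, 1), n_accept / n_proposals
```

Accepted samples are exact samples from the Gibbs state whenever exp(log_M) · q truly upper-bounds p~, and the acceptance rate then determines the cost; this is why the choice of instrumental distribution, which the paper makes by minimizing the α = 2 divergence with the Gibbs state, matters for the asymptotic complexity.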
Similar resources
Unsupervised Generative Modeling Using Matrix Product States
Generative modeling, which learns the joint probability distribution from training data and generates samples according to it, is an important task in machine learning and artificial intelligence. Inspired by the probabilistic interpretation of quantum physics, we propose a generative model using matrix product states, which is a tensor network originally proposed for describing (particularly one-dimen...
Benchmarking Quantum Hardware for Training of Fully Visible Boltzmann Machines
Quantum annealing (QA) is a hardware-based heuristic optimization and sampling method applicable to discrete undirected graphical models. While similar to simulated annealing, QA relies on quantum, rather than thermal, effects to explore complex search spaces. For many classes of problems, QA is known to offer computational advantages over simulated annealing. Here we report on the ability of r...
Reinforcement learning using quantum Boltzmann machines
We investigate whether quantum annealers with select chip layouts can outperform classical computers in reinforcement learning tasks. We associate a transverse field Ising spin Hamiltonian with a layout of qubits similar to that of a deep Boltzmann machine (DBM) and use simulated quantum annealing (SQA) to numerically simulate quantum sampling from this system. We design a reinforcement learnin...
Leveraging Adiabatic Quantum Computation for Election Forecasting
Accurate, reliable sampling from fully-connected graphs with arbitrary correlations is a difficult problem. Such sampling requires knowledge of the probabilities of observing every possible state of a graph. As graph size grows, the number of model states becomes intractably large and efficient computation requires full sampling be replaced with heuristics and algorithms that are only approxima...
Accelerating Deep Learning with Memcomputing
Restricted Boltzmann machines (RBMs) and their extensions, often called “deep-belief networks”, are very powerful neural networks that have found widespread applicability in the fields of machine learning and big data. The standard way of training these models resorts to an iterative unsupervised procedure based on Gibbs sampling, called “contrastive divergence”, and additional supervised tunin...
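For comparison with the rejection-sampling sketch above, here is a minimal sketch of the standard CD-1 contrastive-divergence update for a binary RBM that this blurb refers to; variable names and hyperparameters are illustrative and not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v_data, lr=0.05):
    """One CD-1 update on a batch of binary visible vectors v_data [B, n_vis]."""
    # Positive phase: hidden activations driven by the data.
    ph_data = sigmoid(v_data @ W + c)
    h_data = (rng.random(ph_data.shape) < ph_data).astype(float)
    # Negative phase: a single step of block Gibbs sampling.
    pv_model = sigmoid(h_data @ W.T + b)
    v_model = (rng.random(pv_model.shape) < pv_model).astype(float)
    ph_model = sigmoid(v_model @ W + c)
    # Gradient approximation: <v h>_data - <v h>_model (one-step estimate).
    batch = v_data.shape[0]
    W += lr * (v_data.T @ ph_data - v_model.T @ ph_model) / batch
    b += lr * (v_data - v_model).mean(axis=0)
    c += lr * (ph_data - ph_model).mean(axis=0)
    return W, b, c

# Example usage on random binary "data".
W = 0.01 * rng.standard_normal((6, 4))
b, c = np.zeros(6), np.zeros(4)
v_batch = (rng.random((32, 6)) < 0.5).astype(float)
W, b, c = cd1_update(W, b, c, v_batch)
```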
Journal: CoRR
Volume: abs/1507.02642
Pages: -
Publication year: 2015